11 research outputs found

    A scalable technique for characterizing the usage of temporaries in framework-intensive Java applications

    Full text link
    Framework-intensive applications (e.g., Web applications) heavily use temporary data structures, often resulting in performance bottlenecks. This paper presents an optimized blended escape analysis to approximate object lifetimes and thus to identify these temporaries and their uses. Empirical results show that this optimized analysis on average prunes 37% of the basic blocks in our benchmarks, and achieves a speedup of up to 29 times compared to the original analysis. Newly defined metrics quantify key properties of temporary data structures and their uses. A detailed empirical evaluation offers the first characterization of temporaries in framework-intensive applications. The results show that temporary data structures can include up to 12 distinct object types and can traverse through as many as 14 method invocations before being captured.
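
    To make the notion of a temporary concrete, a minimal Java sketch may help; the class and method names below are invented for illustration and are not taken from the paper's benchmarks:

        import java.util.ArrayList;
        import java.util.List;

        class TemporariesSketch {
            // Each call allocates a List, a StringBuilder, and several Strings
            // purely to produce one result value; an escape analysis that
            // approximates object lifetimes would flag these as temporaries,
            // since none of them outlives the call.
            static String renderRow(String[] fields) {
                List<String> cells = new ArrayList<>();      // temporary
                for (String f : fields) {
                    cells.add(f.trim().toUpperCase());       // temporary Strings
                }
                StringBuilder sb = new StringBuilder();      // temporary
                for (String c : cells) {
                    sb.append(c).append(',');
                }
                return sb.toString();  // only the final String escapes the call
            }
        }

    In a framework-intensive application, structures like these are typically built in one layer and consumed several layers away, which is why the paper measures how many method invocations a temporary traverses before being captured.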

    Visualizing reference patterns for solving memory leaks in Java

    No full text
    Many Java programmers believe they do not have to worry about memory management because of automatic garbage collection. In fact, many Java programs run out of memory unexpectedly after performing a number of operations. A memory leak in Java is caused when an object that is no longer needed cannot be reclaimed because another object is still referring to it. Memory leaks can be difficult to solve, since the complexity of most programs prevents us from manually verifying the validity of every reference. In this paper we show a new methodology for finding the causes of memory leaks. We have identified a basic memory leak scenario which fits many important cases. In this scenario, we allow the programmer to identify a period of time in which temporary objects are expected to be created and released. Using this information we are able to identify objects that persist beyond this period and the references which are holding on to them. Scaling this methodology to real-world systems brings additional challenges. We propose a novel combination of visual syntax and reference pattern extraction to manage this additional complexity. We also describe how these techniques can be applied to a wider class of memory problems, including the exploration of large data structures. These techniques have been implemented and have been proven successful on large projects.
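
    The basic leak scenario the paper describes, an object that is logically dead but still reachable, can be sketched with a hypothetical long-lived map (all names here are invented for illustration):

        import java.util.HashMap;
        import java.util.Map;

        class LeakSketch {
            // A long-lived static map keeps every entry reachable, so the
            // garbage collector can never reclaim them, even after the
            // sessions are logically finished.
            private static final Map<String, byte[]> SESSIONS = new HashMap<>();

            static void handleRequest(String sessionId) {
                // Entries are added during the period in which temporary
                // objects are expected to be created and released...
                SESSIONS.put(sessionId, new byte[1024 * 1024]);
                // ...but never removed, so each byte[] persists beyond that
                // period. The reference pattern SESSIONS -> byte[] is exactly
                // the kind of holding path the visualization would surface.
            }
        }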

    An Information Exploration Tool for Performance Analysis of Java Programs

    No full text
    The diagnosis of performance and memory problems can require the analysis of large and complex data sets describing a program’s execution. An analysis tool must help the user both find the right organization of the data to uncover useful information, and work with the data through a lengthy and unpredictable discovery process. In this paper we present Jinsight EX, a tool for analyzing Java performance that adopts techniques that have been successfully used to explore large data sets in other application domains, and adapts them specifically to the needs of program execution analysis. We introduce execution slices, a high-level organizing abstraction that the user may define and then easily reuse in various settings. We illustrate techniques that allow the user to perform a range of common analysis tasks and to structure a longer analysis process, using this abstraction. We present the tool, its implementation and initial experience of its use.
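
    Jinsight EX is a graphical tool and the abstract gives no API, but the core idea of an execution slice, a named, user-defined selection of execution data that can be reused across analyses, can be roughly modeled as follows (all types and names hypothetical; assumes Java 16+ for records):

        import java.util.List;
        import java.util.function.Predicate;
        import java.util.stream.Collectors;

        // Hypothetical model: a trace event recorded during execution.
        record TraceEvent(String threadName, String methodName, long costNanos) {}

        // An execution slice as a named, reusable filter over the trace,
        // applicable in different settings (time analysis, memory analysis, ...).
        record ExecutionSlice(String name, Predicate<TraceEvent> filter) {
            List<TraceEvent> apply(List<TraceEvent> trace) {
                return trace.stream().filter(filter).collect(Collectors.toList());
            }
        }

    For example, a slice such as new ExecutionSlice("checkout requests", e -> e.methodName().startsWith("checkout.")) could be defined once and then reapplied at several points in a longer analysis, which is the reuse property the paper emphasizes.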

    Finding low-utility data structures

    No full text
    Many opportunities for easy, big-win program optimizations are missed by compilers. This is especially true in highly layered Java applications. Often at the heart of these missed optimization opportunities lie computations that, with great expense, produce data values that have little impact on the program’s final output. Constructing a new date formatter to format every date, or populating a large set full of expensively constructed structures only to check its size: these involve costs that are out of line with the benefits gained. This disparity between the formation costs and accrued benefits of data structures is at the heart of much runtime bloat. We introduce a run-time analysis to discover these low-utility data structures. The analysis employs dynamic thin slicing, which naturally associates costs with value flows rather than raw data flows. It constructs a model of the incremental, hop-to-hop costs and benefits of data structures.
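
    The two anti-patterns named in the abstract translate directly into code; the following sketch is illustrative rather than taken from the paper:

        import java.text.SimpleDateFormat;
        import java.util.Date;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        class LowUtilitySketch {
            // Anti-pattern 1: construct an expensive formatter for every date,
            // even though a single instance would serve all calls.
            static String format(Date d) {
                return new SimpleDateFormat("yyyy-MM-dd").format(d);
            }

            // Anti-pattern 2: expensively populate a set, then consume only
            // its size; the formation cost far outweighs the accrued benefit.
            static int countDistinct(List<String> rawIds) {
                Set<String> canonical = new HashSet<>();
                for (String id : rawIds) {
                    canonical.add(id.trim().toLowerCase());  // cost per element
                }
                return canonical.size();  // the elements themselves are never read
            }
        }

    A cost/benefit model of the kind the paper builds would assign these structures high formation cost and low benefit, flagging them as low-utility.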

    Go with the flow

    No full text

    The causes of bloat, the limits of health

    No full text